9 research outputs found

    The Crypto-democracy and the Trustworthy

    In the current architecture of the Internet, there is a strong asymmetry of power between the entities that gather and process personal data (e.g., major Internet companies, telecom operators, cloud providers) and the individuals from whom this data originates. In particular, individuals have no choice but to blindly trust that these entities will respect their privacy and protect their personal data. In this position paper, we address this issue by proposing a utopian crypto-democracy model based on existing scientific achievements in the field of cryptography. More precisely, our main objective is to show that cryptographic primitives, in particular secure multiparty computation, offer a practical solution for protecting privacy while minimizing trust assumptions. In the envisioned crypto-democracy, individuals do not have to trust a single physical entity with their personal data; instead, their data is distributed among several institutions. Together, these institutions form a virtual entity called the Trustworthy, which is responsible for storing this data and which can also compute on it, provided that all the institutions agree. Finally, we propose a realistic proof-of-concept of the Trustworthy, in which the roles of the institutions are played by universities. This proof-of-concept would be important in demonstrating the possibilities offered by the crypto-democracy paradigm.
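
    As a concrete illustration of the multiparty-computation primitive the Trustworthy would rest on, the sketch below implements additive secret sharing in Python: a personal value is split into random shares held by different institutions, and sums can be computed without any single institution ever seeing an individual value. This is a generic textbook construction, not the paper's concrete protocol; the modulus and the three-institution setting are illustrative assumptions.

    import secrets

    MODULUS = 2**61 - 1  # illustrative prime modulus for share arithmetic

    def share(value: int, n_institutions: int) -> list[int]:
        """Split value into additive shares that sum to value mod MODULUS."""
        shares = [secrets.randbelow(MODULUS) for _ in range(n_institutions - 1)]
        shares.append((value - sum(shares)) % MODULUS)
        return shares

    def reconstruct(shares: list[int]) -> int:
        """Recombine shares; this requires all institutions to cooperate."""
        return sum(shares) % MODULUS

    # Three institutions (e.g., universities) each hold one share per citizen.
    alice = share(42, 3)
    bob = share(58, 3)

    # Each institution adds the two shares it holds, locally. The results
    # reconstruct Alice's value + Bob's value, yet no institution saw either.
    sums = [(a + b) % MODULUS for a, b in zip(alice, bob)]
    assert reconstruct(sums) == 100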

    Uniform and Ergodic Sampling in Unstructured Peer-to-Peer Systems with Malicious Nodes

    ISBN: 978-3-642-17652-4. We consider the problem of uniform sampling in large-scale open systems. Uniform sampling is a fundamental scheme that guarantees that every individual in a population has the same probability of being selected as a sample. An important issue that seriously hampers the feasibility of uniform sampling in open, large-scale systems is the inevitable presence of malicious nodes. In this paper we show that restricting the number of requests that malicious nodes can issue, combined with full knowledge of the composition of the system, is a necessary and sufficient condition to guarantee uniform and ergodic sampling. In a nutshell, uniform and ergodic sampling guarantees that every node in the system is equally likely to appear as a sample at any non-malicious node, and that, infinitely often, every node has a non-null probability of appearing as a sample at any honest node.
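
    A minimal sketch of what these two conditions buy, under stated assumptions: once a node has full knowledge of the membership, uniform sampling is a single uniform draw from that list, and a per-requester budget loosely models the bound on the number of requests malicious nodes may issue. The class and parameter names are illustrative, not taken from the paper.

    import random
    from collections import Counter

    class UniformSampler:
        def __init__(self, members, max_requests_per_node=10):
            self.members = list(members)  # full knowledge of the composition
            self.requests = Counter()     # requests issued so far, per node
            self.max_requests = max_requests_per_node

        def sample(self, requester):
            """Return a uniformly chosen member, or None once the requester
            has exhausted its budget (bounding malicious request floods)."""
            if self.requests[requester] >= self.max_requests:
                return None
            self.requests[requester] += 1
            return random.choice(self.members)  # every node equally likely

    sampler = UniformSampler(f"node{i}" for i in range(100))
    print(sampler.sample("node7"))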

    Scalable and Secure Aggregation in Distributed Networks

    We consider the problem of computing an aggregation function in a secure and scalable way. Whereas previous distributed solutions with similar security guarantees have a communication cost of O(n^3), we present a distributed protocol that requires only a communication complexity of O(n log^3 n), which we prove is near-optimal. Our protocol ensures perfect security against a computationally-bounded adversary, tolerates (1/2 - ϵ)n malicious nodes for any constant 1/2 > ϵ > 0 (not depending on n), and outputs the exact value of the aggregated function with high probability.
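
    To make the problem statement concrete, here is a deliberately naive pairwise-masking scheme for secure sum aggregation: every pair of nodes agrees on a random mask that one adds and the other subtracts, so each published value looks random while the masks cancel in the global sum. This only illustrates what "secure aggregation" computes; it costs O(n^2) messages and has none of the paper's O(n log^3 n) communication or tolerance to malicious nodes.

    import secrets
    from itertools import combinations

    MOD = 2**32  # illustrative modulus for masked arithmetic

    def aggregate_with_masks(inputs: dict[str, int]) -> int:
        masked = dict(inputs)
        for i, j in combinations(sorted(inputs), 2):
            m = secrets.randbelow(MOD)       # mask shared by the pair (i, j)
            masked[i] = (masked[i] + m) % MOD
            masked[j] = (masked[j] - m) % MOD
        # Individually, each masked value is uniform; their sum is exact.
        return sum(masked.values()) % MOD

    assert aggregate_with_masks({"n1": 5, "n2": 17, "n3": 20}) == 42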

    Privacy-Preserving Boosting


    An optimal quantum algorithm to approximate the mean and its application for approximating the median of a set of points over an arbitrary distance

    We describe two quantum algorithms for approximating the mean value of a black-box function. The first algorithm is novel and asymptotically optimal, while the second is a variation on an earlier algorithm due to Aharonov. Both algorithms have their own strengths and caveats and may be relevant in different contexts. We then propose a new algorithm for approximating the median of a set of points over an arbitrary distance function.
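
    The reduction from the median to the mean can be stated classically: over an arbitrary distance, the generalized median of a set is the point of the set whose mean distance to all the points is smallest, so an algorithm that approximates means of black-box functions yields an approximate median. The sketch below computes this exactly; the quantum algorithms replace the inner mean with a cheaper approximation. The distance used in the example is arbitrary and purely illustrative.

    def median(points, dist):
        """Point of `points` minimizing the mean distance to the whole set."""
        def mean_dist(p):
            return sum(dist(p, q) for q in points) / len(points)
        return min(points, key=mean_dist)

    # Works for any distance function, not just Euclidean distance.
    print(median([1, 3, 7, 20], lambda a, b: abs(a * a - b * b)))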

    Reconciling statistical fairness and accuracy in interpretable machine learning via integer linear programming

    Interpretability and fairness are two properties increasingly sought after in machine learning. However, learning optimal interpretable models under statistical fairness constraints has been identified as one of the major technical challenges of interpretability: fairness constraints modify the space of feasible solutions, making it harder to explore. FairCORELS is a supervised learning algorithm for learning certifiably optimal rule-list models that satisfy fairness constraints. FairCORELS is based on CORELS, a branch-and-bound algorithm for learning optimal rule lists. Because of the fairness constraints imposed in FairCORELS, some of the data structures used in CORELS can no longer be applied, which makes exploring the search space more difficult. We propose an approach based on integer linear programming (ILP) that exploits the fairness constraints, and their interaction with accuracy, to prune the search space efficiently and to guide its exploration. Our experiments show that the proposed approach speeds up convergence and enables the learning of optimal fair models.
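
    To illustrate why fairness constraints fit naturally into an ILP, the toy model below encodes predictions as binary variables, maximizes accuracy, and bounds the statistical-parity gap between two groups with two linear constraints. This is not the FairCORELS or CORELS formulation; the dataset, the PuLP modeling library, and the eps threshold are all illustrative assumptions.

    # pip install pulp
    from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

    # Hypothetical toy dataset: sensitive group and true label per example.
    group = [0, 0, 0, 1, 1, 1]
    label = [1, 0, 1, 0, 0, 1]
    n, eps = len(group), 0.2   # eps bounds the statistical-parity gap

    prob = LpProblem("fair_predictions", LpMaximize)
    y = [LpVariable(f"y{i}", cat=LpBinary) for i in range(n)]  # predictions

    # Objective: number of correctly predicted examples (accuracy).
    prob += lpSum(y[i] if label[i] == 1 else 1 - y[i] for i in range(n))

    # Statistical parity: positive-prediction rates of the groups within eps.
    g0 = [i for i in range(n) if group[i] == 0]
    g1 = [i for i in range(n) if group[i] == 1]
    gap = lpSum(y[i] for i in g0) / len(g0) - lpSum(y[i] for i in g1) / len(g1)
    prob += gap <= eps
    prob += gap >= -eps

    prob.solve()
    print([int(v.value()) for v in y])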